
    A study of dependency features of spike trains through copulas

    Simultaneous recordings from many neurons hide important information: despite progress in statistical and machine learning techniques, the connections characterizing the network generally remain undiscovered. Discerning the presence of direct links between neurons from data is still a largely unsolved problem. To enlarge the set of tools for detecting the underlying network structure, we propose the use of copulas, pursuing a research direction we started in [1]. Here, we adapt their use to distinguish different types of connections in a very simple network. Our proposal consists in choosing suitable random intervals in pairs of spike trains that determine the shapes of their copulas. We show that this approach allows us to detect different types of dependencies. We illustrate the features of the proposed method on synthetic data from suitably connected networks of two or three formal neurons, either directly connected or influenced by the surrounding network. We show how a smart choice of pairs of random times, together with the use of empirical copulas, allows us to discern between direct and indirect interactions.
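    The random-interval idea can be sketched in a few lines. The following is a minimal illustration, not the authors' exact procedure: the spike-train model, the interval width, the coupling probability, and the rank transform used for the pseudo-observations are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic spike trains (sorted spike times). Neuron B partially
# follows neuron A with a small lag -- a hypothetical direct connection.
spikes_a = np.sort(rng.uniform(0, 100, 400))
spikes_b = np.sort(np.concatenate([
    spikes_a[rng.random(spikes_a.size) < 0.5] + 0.3,  # driven spikes
    rng.uniform(0, 100, 200),                         # background spikes
]))

# Draw random observation intervals and count each neuron's spikes in them.
starts = rng.uniform(0, 95, 1000)
width = 5.0
count_a = np.searchsorted(spikes_a, starts + width) - np.searchsorted(spikes_a, starts)
count_b = np.searchsorted(spikes_b, starts + width) - np.searchsorted(spikes_b, starts)

def pseudo_obs(x):
    # Ordinal ranks (ties broken by order) scaled to (0, 1); these paired
    # pseudo-observations approximate the empirical copula of the counts.
    return (np.argsort(np.argsort(x)) + 1) / (len(x) + 1)

u, v = pseudo_obs(count_a), pseudo_obs(count_b)

# A crude dependence summary: correlation of the pseudo-observations
# (Spearman's rho of the counts); positive for this connected pair.
rho = np.corrcoef(u, v)[0, 1]
print(f"rank correlation of interval counts: {rho:.2f}")
```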

    Echo State Networks with Self-Normalizing Activations on the Hyper-Sphere

    Among the various architectures of Recurrent Neural Networks, Echo State Networks (ESNs) emerged due to their simplified and inexpensive training procedure. These networks are known to be sensitive to the setting of hyper-parameters, which critically affect their behaviour. Results show that their performance is usually maximized in a narrow region of hyper-parameter space called the edge of chaos. Finding such a region requires searching hyper-parameter space in a sensible way: configurations marginally outside it might yield networks exhibiting fully developed chaos, hence producing unreliable computations. The performance gain obtained by optimizing hyper-parameters can be studied through the memory--nonlinearity trade-off, i.e., the fact that increasing the nonlinear behaviour of the network degrades its ability to remember past inputs, and vice versa. In this paper, we propose a model of ESNs that eliminates critical dependence on hyper-parameters, resulting in networks that provably cannot enter a chaotic regime and, at the same time, exhibit nonlinear behaviour in phase space characterized by a large memory of past inputs, comparable to that of linear networks. Our contribution is supported by experiments corroborating our theoretical findings, showing that the proposed model displays dynamics rich enough to approximate many common nonlinear systems used for benchmarking.
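    One plausible reading of a self-normalizing, hypersphere-constrained state update can be sketched as follows. This is an illustration only, not the paper's exact model: the reservoir size, weight scales, and input signal are assumptions. The point of the sketch is that renormalizing the state after every update pins its norm to 1 regardless of the weight matrix's spectrum, so the usual spectral-radius tuning loses its critical role.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100  # reservoir size (illustrative)

# Random reservoir and input weights; values are not tuned on purpose.
W = rng.normal(0.0, 1.0 / np.sqrt(n), (n, n))
w_in = rng.normal(0.0, 1.0, n)

def step(x, u):
    # Standard ESN update followed by projection onto the unit
    # hypersphere, which keeps ||x|| = 1 whatever W's spectrum is.
    pre = np.tanh(W @ x + w_in * u)
    return pre / np.linalg.norm(pre)

x = np.zeros(n)
x[0] = 1.0  # start on the sphere
for u in np.sin(0.1 * np.arange(200)):
    x = step(x, u)

print(np.linalg.norm(x))  # equals 1 by construction
```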

    A characterization of the Edge of Criticality in Binary Echo State Networks

    Echo State Networks (ESNs) are simplified recurrent neural network models composed of a reservoir and a linear, trainable readout layer. The reservoir is tuned by a few hyper-parameters that control the network behaviour. ESNs are known to be effective in solving tasks when configured in a region of (hyper-)parameter space called the Edge of Criticality (EoC), where the system is maximally sensitive to perturbations, which strongly affects its behaviour. In this paper, we propose binary ESNs, which are architecturally equivalent to standard ESNs but adopt binary activation functions and binary recurrent weights. For these networks, we derive a closed-form expression for the EoC in the autonomous case and perform simulations to assess their behaviour in the case of noisy neurons and in the presence of a signal. We propose a theoretical explanation for the fact that the variance of the input plays a major role in characterizing the EoC.
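    Sensitivity to perturbations in a binary reservoir can be probed with a standard damage-spreading experiment: flip one neuron, run two copies of the network, and track the normalized Hamming distance between them. The sketch below is an illustration, not the paper's analysis; the network size, in-degree, and zero threshold are assumed values.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500   # number of binary neurons (illustrative)
k = 10    # in-degree per neuron (hypothetical choice)

# Binary recurrent weights in {-1, +1} on a sparse random graph.
W = np.zeros((n, n))
for i in range(n):
    idx = rng.choice(n, k, replace=False)
    W[i, idx] = rng.choice([-1.0, 1.0], k)

def step(x, theta=0.0):
    # Binary activation: sign of the weighted input minus a threshold.
    return np.where(W @ x > theta, 1.0, -1.0)

# Damage spreading: flip one neuron and follow both trajectories.
x = rng.choice([-1.0, 1.0], n)
y = x.copy()
y[0] = -y[0]
for _ in range(50):
    x, y = step(x), step(y)

# Near the EoC the perturbation neither dies out nor saturates.
d = float(np.mean(x != y))
print(f"normalized Hamming distance after 50 steps: {d:.3f}")
```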

    Learn to Synchronize, Synchronize to Learn

    In recent years, the machine learning community has seen continuously growing interest in research investigating the dynamical aspects of both training procedures and trained recurrent models. Of particular interest among recurrent neural networks is the Reservoir Computing (RC) paradigm, characterized by conceptual simplicity and a fast training scheme. Yet, the guiding principles under which RC operates are only partially understood. In this work, we study the properties behind learning dynamical systems and propose a new guiding principle based on Generalized Synchronization (GS), which guarantees that a generic task can be learned with RC architectures. We show that the well-known Echo State Property (ESP) implies and is implied by GS, so that theoretical results derived from the ESP still hold when GS does. However, by using GS one can profitably study the RC learning procedure by linking the reservoir dynamics with the readout training. Notably, this allows us to shed light on the interplay between the input encoding performed by the reservoir and the output produced by the readout optimized for the task at hand. In addition, we show that - as opposed to the ESP - satisfaction of the GS can be measured by means of the Mutual False Nearest Neighbors index, which makes the theoretical derivations effective for practitioners.
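    The synchronization view can be probed numerically with the classic replica test: drive two copies of the same reservoir, started from different initial states, with a common input, and check whether their states converge. A minimal sketch follows; the reservoir size, scaling, and input signal are illustrative, and scaling by the spectral norm is a conservative sufficient condition for the ESP, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(3)
n, T = 100, 500

# Scale W so the update map is a contraction (||W||_2 < 1), a sufficient
# condition for the Echo State Property and hence for synchronization.
W = rng.normal(0.0, 1.0, (n, n))
W *= 0.9 / np.linalg.norm(W, 2)
w_in = rng.uniform(-1.0, 1.0, n)

u = np.sin(0.2 * np.arange(T))  # common driving input

# Two replicas of the same reservoir from different initial states.
xa = rng.uniform(-1.0, 1.0, n)
xb = rng.uniform(-1.0, 1.0, n)
for t in range(T):
    xa = np.tanh(W @ xa + w_in * u[t])
    xb = np.tanh(W @ xb + w_in * u[t])

# A vanishing gap means both replicas follow the same input-driven
# trajectory: the state is a function of the input history alone.
gap = np.linalg.norm(xa - xb)
print(f"state gap after {T} steps: {gap:.2e}")
```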

    Input-to-State Representation in linear reservoirs dynamics

    Reservoir computing is a popular approach to design recurrent neural networks, due to its training simplicity and approximation performance. The recurrent part of these networks is not trained (e.g., via gradient descent), making them appealing for analytical studies by a large community of researchers with backgrounds spanning from dynamical systems to neuroscience. However, even in the simple linear case, the working principle of these networks is not fully understood and their design is usually driven by heuristics. A novel analysis of the dynamics of such networks is proposed, which allows the investigator to express the state evolution using the controllability matrix. Such a matrix encodes salient characteristics of the network dynamics; in particular, its rank represents an input-independent measure of the memory capacity of the network. Using the proposed approach, it is possible to compare different reservoir architectures and explain why a cyclic topology achieves the favourable results verified by practitioners.
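    The rank argument can be made concrete in a few lines. The sketch below is a minimal illustration: the ring and diagonal topologies, the decay factor, and the single-node input vector are all assumptions chosen to make the rank contrast visible, not the paper's experimental setup.

```python
import numpy as np

def controllability_rank(A, w):
    # C = [w, A w, A^2 w, ..., A^(n-1) w]; its rank is an
    # input-independent measure of how many distinct pieces of past
    # input the linear reservoir x_{t+1} = A x_t + w u_t can store.
    n = A.shape[0]
    cols, v = [], w.astype(float)
    for _ in range(n):
        cols.append(v)
        v = A @ v
    return np.linalg.matrix_rank(np.column_stack(cols))

n = 20
w = np.zeros(n)
w[0] = 1.0  # input enters through a single node (illustrative choice)

# Ring topology: each node feeds the next, as in simple cycle reservoirs.
A_cycle = 0.9 * np.roll(np.eye(n), 1, axis=0)

# Disconnected self-loops with identical decay: every mode looks the same,
# so past inputs pile up along one direction and cannot be told apart.
A_diag = 0.9 * np.eye(n)

r_cycle = controllability_rank(A_cycle, w)
r_diag = controllability_rank(A_diag, w)
print(r_cycle, r_diag)  # the cycle reaches full rank n, the diagonal rank 1
```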

    The importance of observation times in the formulation of the ergodic hypothesis

    This thesis deals with the ergodic hypothesis, a central problem in the justification of the results of statistical mechanics, and with the role that the observation time plays in it. After presenting various formulations of the ergodic problem, the question of recurrence times is examined, and it is shown that Poincaré's recurrence theorem does not contradict the possibility of reaching equilibrium. Finally, the analysis of the apparent Fermi-Pasta-Ulam paradox and the discussion of some proposed solutions provide an application of the abstract treatment developed earlier.

    Learning dynamical systems using dynamical systems: the reservoir computing approach

    Dynamical systems have been used to describe a vast range of phenomena in the physical sciences, biology, neuroscience, and economics, to name a few. The development of a mathematical theory for dynamical systems allowed researchers to create precise models of many phenomena, predicting their behaviour with great accuracy. For many systems, however, highly accurate models are notably hard to produce, due to the enormous number of variables involved and the complexity of their interactions. In recent years, the availability of large datasets has driven researchers to approach these complex systems with machine learning techniques. These techniques are valuable in settings where no model can be formulated explicitly, but the working principles of the resulting models are often obscure and their optimization is driven by heuristics. In this context, this work aims at advancing the field by "opening the black box" of data-driven models developed for dynamical systems. We focus on Recurrent Neural Networks (RNNs), one of the most promising and yet least understood approaches. In particular, we concentrate on a specific neural architecture that goes under the name of Reservoir Computing (RC). We address three problems: (1) how the learning procedure of these models can be understood and improved, (2) how these systems encode a representation of the inputs they receive, and (3) how the dynamics of these systems affect their performance. We make use of various tools from the theory of dynamical systems to explain how we can better understand the working principles of RC, aiming at developing new guiding principles to improve their design.

    Unbiased choice of global clustering parameters for single-molecule localization microscopy

    Single-molecule localization microscopy resolves objects below the diffraction limit of light via sparse, stochastic detection of target molecules. Single molecules appear as clustered detection events after image reconstruction. However, identification of clusters of localizations is often complicated by the spatial proximity of target molecules and by background noise. Clustering results of existing algorithms often depend on user-generated training data or user-selected parameters, which can lead to unintentional clustering errors. Here we propose an unbiased algorithm (FINDER) based on adaptive global parameter selection and demonstrate that the algorithm is robust to noise inclusion and target molecule density. We benchmarked FINDER against the most common density-based clustering algorithms in test scenarios based on experimental datasets. We show that FINDER keeps the number of false positive inclusions low while also maintaining a low number of false negative detections in densely populated regions.
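    The flavour of global parameter selection for density-based clustering can be illustrated with a much simpler stand-in for FINDER: scan a radius parameter over a grid and pick the cluster count that persists over the widest range of the scan. Everything below is an assumption for illustration: the synthetic localizations, the radius grid, and the plateau heuristic (this is effectively DBSCAN with min_samples=1, not FINDER's actual procedure).

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic localizations: three tight clusters plus uniform background.
centers = np.array([[0.0, 0.0], [5.0, 5.0], [0.0, 5.0]])
pts = np.vstack([c + 0.2 * rng.normal(size=(50, 2)) for c in centers])
pts = np.vstack([pts, rng.uniform(-2.0, 7.0, (30, 2))])

def n_components(points, eps):
    # Connected components of the radius graph: two points are linked
    # when their distance is below eps (DBSCAN with min_samples=1).
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    adj = d < eps
    labels = -np.ones(len(points), dtype=int)
    cur = 0
    for i in range(len(points)):
        if labels[i] >= 0:
            continue
        labels[i] = cur
        stack = [i]
        while stack:
            j = stack.pop()
            for k in np.flatnonzero(adj[j] & (labels < 0)):
                labels[k] = cur
                stack.append(k)
        cur += 1
    return cur

# Global parameter selection: the cluster count that survives the
# widest stretch of the eps scan is taken as the stable answer.
eps_grid = np.linspace(0.05, 1.0, 40)
counts = np.array([n_components(pts, e) for e in eps_grid])
vals, widths = np.unique(counts, return_counts=True)
stable = int(vals[np.argmax(widths)])
print(f"most stable cluster count across the eps scan: {stable}")
```

Since adding edges can only merge components, the count is nonincreasing in eps, which is what makes the plateau heuristic well defined.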

    SICE national survey: current status on the adoption of laparoscopic approach to the treatment of colorectal disease in Italy

    The real diffusion of laparoscopy for the treatment of colorectal diseases in Italy is largely unknown. The main purpose of the present study is to investigate, among surgeons dedicated to minimally invasive surgery, the volume of laparoscopic colorectal procedures, the type of operation performed in comparison to the traditional approach, the indication for surgery (benign or malignant), and the different types of technologies used. A structured questionnaire was developed in collaboration with an international market research institute and the survey was published online; the invitation to participate in the survey was issued to the members of the Italian Society of Endoscopic Surgery (SICE). 211 surgeons working in 57 surgical departments in Italy completed the online survey. A total of 6357 colorectal procedures were recorded during the year 2015, of which 4104 (64.1%) were performed using a minimally invasive approach. Colon and rectal cancer were the most common indications for the laparoscopic approach (83.1%). Left colectomy was the operation most commonly performed (41.8%), while rectal resection accounted for 23.5% of the cases. The overall conversion rate was 5.9% (242/4104). Full HD standard technology was available and routinely used in all the responders' centers. The proportion of colorectal resections carried out laparoscopically in dedicated centers has now reached valuable levels, with a low conversion rate.